Right, good morning. As I assume you've been told, Professor Kohlhase is not here this week, so you have to make do with me for today. For tomorrow, that's not a problem; it's a public holiday. I figured I'd mention that because if you're anything like me, you tend to forget those kinds of things. So do your grocery shopping tonight, I guess. Great. So let's recap what you should have done last week. Hidden Markov models, right?
So the basic setting, which will kind of be a continuous theme for the next couple of
lectures, is we have some discrete variable X, which is basically just a representation
of a bunch of states or something that we're interested in, and we can make observations.
We can describe the transition model between the different states as a single matrix, basically,
which just encodes the probabilities of getting from one state to the next one. The classic
example you've been doing with this kind of security guard in some bunker, I think, whose
only observation is basically whether the administrator, I think, has an umbrella or
not and tries to guess whether it's raining outside. It's a bunker, so there are no windows.
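The setup just described can be written down concretely. Here is a minimal sketch of the umbrella world, assuming the 0.7/0.3 values mentioned in the lecture; the official course slides may use different numbers for the sensor model.

```python
import numpy as np

# Hidden state X: 0 = rain, 1 = no rain.
# Transition matrix T[i, j] = P(X_{t+1} = j | X_t = i),
# assuming the weather persists with probability 0.7:
T = np.array([[0.7, 0.3],   # rain today -> rain / no rain tomorrow
              [0.3, 0.7]])  # no rain today -> rain / no rain tomorrow

# Sensor model as diagonal observation matrices: the diagonal holds
# P(e | X = state) for the observation e actually made. Assuming the
# umbrella matches the weather with probability 0.7:
O_umbrella = np.diag([0.7, 0.3])     # P(umbrella | rain), P(umbrella | no rain)
O_no_umbrella = np.diag([0.3, 0.7])  # the complementary observation

# Sanity check: each row of T is a probability distribution.
print(T.sum(axis=1))  # [1. 1.]
```

The diagonal form of the observation matrices is what makes the filtering and smoothing equations below come out as plain matrix products.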
Taking that as an example, you get this matrix here with 0.7 for the case that the umbrella observation actually matches the actual weather outside, and 0.3 in the other cases. We can encode all of this as matrices, which is nice because matrices give us all the tools that linear algebra offers, and those tend to be rather efficient. We take one big matrix for the transition model and one matrix for our sensors, in this case just the observation of whether there is an umbrella or not. That gives
us these two equations, one for filtering forward, one for smoothing backward, which now is only
basically matrix algebra, which is nice. If you try to take the classic forward-backward
algorithm for this, you get basically quadratic time and linear space with respect to the number of states S, and linear scaling with respect to the sequence length T in both cases. I'm going to rush over this a bit because
you've done all of this last week, I guess. Yes?
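The two matrix equations mentioned here, forward filtering and backward smoothing, can be sketched like this. This is a hedged illustration, again assuming the umbrella world with 0.7/0.3 values rather than the exact numbers on the slides:

```python
import numpy as np

T = np.array([[0.7, 0.3],          # transition model
              [0.3, 0.7]])
O = {True:  np.diag([0.7, 0.3]),   # sensor matrix if an umbrella is seen
     False: np.diag([0.3, 0.7])}   # sensor matrix if not

def forward_backward(evidence, prior):
    """Smoothed state estimates using only matrix algebra."""
    n = len(evidence)
    # Forward pass (filtering): f_t is proportional to O_t T^T f_{t-1}.
    f = [prior]
    for e in evidence:
        v = O[e] @ T.T @ f[-1]
        f.append(v / v.sum())
    # Backward pass (smoothing): b_t = T O_{t+1} b_{t+1},
    # combined elementwise with the stored forward messages.
    b = np.ones(len(prior))
    smoothed = []
    for t in range(n, 0, -1):
        s = f[t] * b
        smoothed.insert(0, s / s.sum())
        b = T @ O[evidence[t - 1]] @ b
    return smoothed

# Two days, umbrella seen both times, uniform prior over rain / no rain:
result = forward_backward([True, True], np.array([0.5, 0.5]))
```

Storing all forward messages is what gives the linear space in S per time step; each step is a matrix-vector product, hence the quadratic time in S.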
Could you turn off the lights?
Yeah, if I can figure out how. This one maybe? Is that better? No, they're still on.
I think in the corner. There are more buttons. Did that do something?
Well, the lights are off, but it didn't do anything. I don't know, we can dim that, right?
Yes, I think there are two buttons.
No, that's not it. Those are the only buttons available though. Sorry about that. Let's
look at an example. Assume you're all working for NASA for some reason, sitting inside their
control center in wherever they're actually located. You're responsible for, let's say,
monitoring the Curiosity rover. Let's assume this rover only has four actual sensors, which only tell it whether there is an obstacle in the north, south, east, or west direction.
Whatever you're reading here, just mentally substitute E with W. I think Professor Kohlhase got the two confused. Let's assume this were an actual map of Mars. Looks Marsy
enough I guess. Let's assume that the feedback we get from our sensors is that there are
obstacles north, south, and west. Then you can, looking at that map, figure out that
if that's the case, then the robot has to be in one of those four possible locations.
So far we're always assuming that all of our sensors are perfect. We'll change that later.
These are the locations where the robot could possibly be in this situation. Now if we add an additional observation, assuming that in the second time step I get the information
that now there's an obstacle in north and south, then we can filter out all the impossible
states and figure out, okay, now the rover has to be in that precise square. Consequently,
it had to have been in the top-left-most of those squares. So taking that as a running
example now for what we can do. So how do we do all of that? Well, we take a random
variable for the location which we're interested in. In this case, it just happens to be 42
squares that the robot could possibly be in. Not sure if the 42 is a coincidence. I'm pretty
sure it's not. And we can make a transition matrix for all of the possible moves that
the rover can do. In this case, what does that mean? Well, remember our transition matrix
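The localization idea just described, with perfect sensors, can be sketched on a toy grid. The map below is made up for illustration (it is not the slide's Mars map, and it has fewer than 42 squares), but the mechanism is the same: keep only the squares consistent with the observation, then intersect with the squares reachable from the previous belief.

```python
MAP = [  # 1 = obstacle, 0 = free; a hypothetical map, not the slide's
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
]

def blocked(r, c):
    return MAP[r][c] == 1

def percept(r, c):
    # Perfect sensor reading: which of (N, S, E, W) are blocked?
    return (blocked(r - 1, c), blocked(r + 1, c),
            blocked(r, c + 1), blocked(r, c - 1))

FREE = [(r, c) for r in range(1, 4) for c in range(1, 5) if not blocked(r, c)]

def consistent(obs):
    # With perfect sensors the rover must sit on a square whose
    # reading matches the observation exactly.
    return {sq for sq in FREE if percept(*sq) == obs}

def neighbours(r, c):
    return {(r + dr, c + dc)
            for dr, dc in ((-1, 0), (1, 0), (0, 1), (0, -1))
            if not blocked(r + dr, c + dc)}

# First observation: obstacles north and south -> four candidate squares.
belief1 = consistent((True, True, False, False))
# The rover moves one step; new observation: obstacles north and east.
# Keep squares that match AND are reachable from the old belief.
reachable = {n for sq in belief1 for n in neighbours(*sq)}
belief2 = consistent((True, False, True, False)) & reachable
print(belief1, belief2)  # four candidates, then a single square
```

With noisy sensors, the same two steps become the predict and update of the forward equation: sets of possible squares turn into a probability vector over all squares, and the reachability check turns into the transition matrix.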
Accessible via: Open Access
Duration: 01:26:09 min
Recording date: 2018-05-30
Uploaded on: 2018-05-31 10:31:05
Language: en-US

The course builds on the Künstliche Intelligenz I lecture from the winter semester and continues it.

Learning objectives and competencies (subject, learning, and methodological competence):
- Knowledge: Students become acquainted with fundamental representation formalisms and algorithms of Artificial Intelligence.
- Application: The concepts are applied to examples from the real world (exercise assignments).
- Analysis: Students learn to better assess human intelligence through its modelling in the machine.